10. Support Vector Machine
Support Vector Machine, or "SVM", is just a funny name for a particular supervised machine learning algorithm that allows you to partition the parameter space of your dataset into discrete classes. If you're new to machine learning, now might be a good time to check out Udacity's free Intro to Machine Learning Course, where SVMs and many other powerful algorithms are discussed in detail.
SVMs work by applying an iterative method to a training dataset, where each item in the training set is characterized by a feature vector and a label. In the image above, each point is characterized by just two features, A and B. The color of each point corresponds to its label, or which class of object it represents in the dataset.
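To make this concrete, here's a minimal sketch of what such a training set looks like as arrays (the specific numbers and variable names are made up purely for illustration):

import numpy as np

# Each row is one object's feature vector: [feature_A, feature_B]
X_train = np.array([[1.0, 2.5],
                    [1.2, 2.1],
                    [6.8, 7.9],
                    [7.1, 8.3]])
# One label per object, identifying which class it belongs to
y_train = np.array([0, 0, 1, 1])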
Applying an SVM to the training set shown above allows you to partition the entire parameter space into discrete classes. The divisions between classes in parameter space are known as "decision boundaries", shown here by the colored polygons overlaid on the data. Having decision boundaries means that when you're considering a new object for which you have features but no label, you can immediately assign it to a specific class. In other words, once you have trained your SVM, you can use it for the task of object recognition!
SVMs in Scikit-Learn
The Scikit-Learn or sklearn package in Python offers a variety of SVM implementations to choose from. For our purposes we'll be using a basic SVM with a linear kernel, because it tends to do a good job at classification and runs faster than more complicated implementations, but I'd encourage you to check out the other possibilities in the sklearn.svm package.
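If you'd like to experiment with other kernels, here's a quick sketch of how some of the built-in options can be instantiated (see the sklearn.svm.SVC documentation for the full list):

from sklearn import svm

# The same classifier class supports several kernels
svc_linear = svm.SVC(kernel='linear')
svc_rbf = svm.SVC(kernel='rbf')              # radial basis function (Gaussian) kernel
svc_poly = svm.SVC(kernel='poly', degree=3)  # polynomial kernel of degree 3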
Training Data
Before we can train our SVM, we'll need a labeled dataset! To quickly generate some data, we'll use the cluster_gen() function that we defined in the previous lesson on clustering for segmentation. Now, however, we'll have the function output labels for each of the cluster data points as well as the x and y positions (check out the generate_clusters.py tab in the quiz below for details). You'll call it like this:
n_clusters = 5
clusters_x, clusters_y, labels = cluster_gen(n_clusters)
In this case, your features are the x and y positions of the cluster points, and the labels are just numbers associated with each cluster. To use these as training data, you need to convert them to the format expected by sklearn.svm.SVC(): a feature set of shape (n_samples, m_features) and labels of length n_samples (in this case, n_samples is the total number of cluster points and m_features is 2). It's common in machine learning applications to call your feature set X and your labels y. Given the output format from cluster_gen(), you can create features and labels like this:
import numpy as np
X = np.float32((np.concatenate(clusters_x), np.concatenate(clusters_y))).transpose()
y = np.float32((np.concatenate(labels)))
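As a quick sanity check (this snippet is just illustrative), you can confirm the arrays have the shapes sklearn.svm.SVC() expects:

print(X.shape)  # (n_samples, 2) -- one row per cluster point
print(y.shape)  # (n_samples,)   -- one label per cluster point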
Once you've got the training data sorted out, sklearn makes it super easy to create and train your SVM!
from sklearn import svm
svc = svm.SVC(kernel='linear').fit(X, y)
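Once trained, the classifier can assign labels to new, unlabeled points with a single call. Here's a minimal sketch, where the coordinates are made up for illustration:

import numpy as np

# Predict the cluster label for two hypothetical new points in (x, y) feature space
new_points = np.float32([[10.0, 15.0], [40.0, 35.0]])
predictions = svc.predict(new_points)
print(predictions)  # one predicted label per point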
And in the exercise below you can plot up the result! Explore what happens with different datasets. You can change the number in the np.random.seed(424) statement to generate a different dataset. Check out the documentation for sklearn.svm.SVC() to see which parameters you can tweak and how your results vary.
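One parameter worth experimenting with is C, which trades off a smooth decision boundary against classifying every training point correctly (larger C tries harder to fit the training data). Here's a rough sketch of comparing a few values, assuming X and y are defined as above:

from sklearn import svm

# Smaller C tolerates more misclassified training points (smoother boundary);
# larger C penalizes training errors more heavily
for C in (0.01, 1.0, 100.0):
    svc = svm.SVC(kernel='linear', C=C).fit(X, y)
    print(C, svc.score(X, y))  # training-set accuracy at each setting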
Start Quiz:
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm
from generate_clusters import cluster_gen
np.random.seed(424) # Change the number to generate a different cluster.
n_clusters = 3
clusters_x, clusters_y, labels = cluster_gen(n_clusters)
# Convert to a training dataset in sklearn format
X = np.float32((np.concatenate(clusters_x), np.concatenate(clusters_y))).transpose()
y = np.float32((np.concatenate(labels)))
# Create an instance of SVM and fit the data.
ker = 'linear'
svc = svm.SVC(kernel=ker).fit(X, y)
# Create a mesh that we will use to colorfully plot the decision surface
# Plotting Routine courtesy of: http://scikit-learn.org/stable/auto_examples/svm/plot_iris.html#sphx-glr-auto-examples-svm-plot-iris-py
# Note: this coloring scheme breaks down at > 7 clusters or so
h = 0.2 # step size in the mesh
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1 # -1 and +1 to add some margins
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                     np.arange(y_min, y_max, h))
# Classify each block of the mesh (used to assign its color)
Z = svc.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.coolwarm, alpha=0.8)
# Plot the training points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.coolwarm, edgecolors='black')
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.xticks(())
plt.yticks(())
plt.title('SVC with '+ker+' kernel', fontsize=20)
plt.show()
import numpy as np
# Define a function to generate clusters
def cluster_gen(n_clusters, pts_minmax=(100, 500), x_mult=(2, 7), y_mult=(2, 7),
                x_off=(0, 50), y_off=(0, 50)):
    # n_clusters = number of clusters to generate
    # pts_minmax = range of number of points per cluster
    # x_mult = range of multiplier to modify the size of cluster in the x-direction
    # y_mult = range of multiplier to modify the size of cluster in the y-direction
    # x_off = range of cluster position offset in the x-direction
    # y_off = range of cluster position offset in the y-direction

    # Initialize some empty lists to receive cluster member positions
    clusters_x = []
    clusters_y = []
    labels = []
    # Generate random values given parameter ranges
    n_points = np.random.randint(pts_minmax[0], pts_minmax[1], n_clusters)
    x_multipliers = np.random.randint(x_mult[0], x_mult[1], n_clusters)
    y_multipliers = np.random.randint(y_mult[0], y_mult[1], n_clusters)
    x_offsets = np.random.randint(x_off[0], x_off[1], n_clusters)
    y_offsets = np.random.randint(y_off[0], y_off[1], n_clusters)
    # Generate random clusters given parameter values
    for idx, npts in enumerate(n_points):
        xpts = np.random.randn(npts) * x_multipliers[idx] + x_offsets[idx]
        ypts = np.random.randn(npts) * y_multipliers[idx] + y_offsets[idx]
        clusters_x.append(xpts)
        clusters_y.append(ypts)
        labels.append(np.zeros_like(xpts) + idx)
    # Return cluster positions and labels
    return clusters_x, clusters_y, labels
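If you want to go a step further than the quiz, one natural extension is to hold out some of the cluster points and check how well the trained SVM generalizes to data it never saw. Here's a minimal sketch using sklearn's train_test_split (found in sklearn.model_selection in recent versions):

import numpy as np
from sklearn import svm
from sklearn.model_selection import train_test_split
from generate_clusters import cluster_gen

np.random.seed(424)
clusters_x, clusters_y, labels = cluster_gen(3)
X = np.float32((np.concatenate(clusters_x), np.concatenate(clusters_y))).transpose()
y = np.float32(np.concatenate(labels))
# Hold out 25% of the points for testing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
svc = svm.SVC(kernel='linear').fit(X_train, y_train)
print(svc.score(X_test, y_test))  # accuracy on the held-out points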